Name | Version | Summary | Date |
---- | ------- | ------- | ---- |
fluke-fl | 0.0.1 | Federated Learning Utility framework for Experimentation | 2024-05-17 10:39:26 |
dooc | 0.0.1 | Digital Organoid On Chips | 2024-05-17 08:37:07 |
vector-quantize-pytorch | 1.14.24 | Vector Quantization - Pytorch | 2024-05-17 03:51:36 |
magvit2-pytorch | 0.4.5 | MagViT2 - Pytorch | 2024-05-16 23:59:29 |
vision-mamba | 0.1.0 | Vision Mamba - Pytorch | 2024-05-16 18:16:57 |
meshgpt-pytorch | 1.2.7 | MeshGPT Pytorch | 2024-05-16 12:38:37 |
llamafactory | 0.7.1 | Easy-to-use LLM fine-tuning framework | 2024-05-16 10:32:02 |
EasyDeL | 0.0.65 | An open-source library to make training faster and more optimized in Jax/Flax | 2024-05-16 09:39:43 |
albucore | 0.0.3 | A high-performance image processing library designed to optimize and extend the Albumentations library with specialized functions for advanced image transformations. Perfect for developers working in computer vision who require efficient and scalable image augmentation. | 2024-05-16 02:11:07 |
lavis-gml | 1.0.2.post4 | LAVIS - A One-stop Library for Language-Vision Intelligence | 2024-05-15 21:24:27 |
walledeval | 0.0.2.dev0 | An open-source toolkit to test LLMs against jailbreaks and unprecedented harms. | 2024-05-15 15:19:21 |
pydlutils | 0.0.6 | Utility library for deep learning | 2024-05-15 07:06:22 |
alphafold3 | 0.0.8 | Paper - Pytorch | 2024-05-15 01:28:40 |
nvidia-cudnn-cu12 | 9.1.1.17 | cuDNN runtime libraries | 2024-05-14 20:24:30 |
nvidia-cudnn-cu11 | 9.1.1.17 | cuDNN runtime libraries | 2024-05-14 20:23:03 |
gpt4o | 0.0.1 | gpt4o - Pytorch | 2024-05-14 20:17:36 |
FJFormer | 0.0.57 | Embark on a journey of unparalleled computational prowess with FJFormer - an arsenal of custom Jax Flax Functions and Utils that elevate your AI endeavors to new heights! | 2024-05-14 18:50:33 |
swarms | 5.0.1 | Swarms - Pytorch | 2024-05-14 16:07:36 |
AdsbAnomalyDetector | 0.4.0 | Low-altitude aircraft anomaly detector | 2024-05-14 09:46:47 |
deepnccl | 2.1.0 | DEEP-NCCL is an AI-accelerator communication framework for NVIDIA NCCL. It implements optimized all-reduce, all-gather, reduce, broadcast, reduce-scatter, and all-to-all, as well as any send/receive-based communication pattern. It has been optimized to achieve high bandwidth on Aliyun machines using PCIe, NVLink, and NVSwitch, as well as networking using InfiniBand Verbs, eRDMA, or TCP/IP sockets. | 2024-05-14 09:33:26 |
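As one illustration of how the listed packages are used, below is a minimal sketch for vector-quantize-pytorch, assuming the `VectorQuantize` module interface documented in the project's README; the dimensions and hyperparameters shown are illustrative choices, not prescribed by the listing above.

```python
# Minimal sketch: quantizing a batch of feature vectors with
# vector-quantize-pytorch (assumes `pip install vector-quantize-pytorch`).
import torch
from vector_quantize_pytorch import VectorQuantize

# Illustrative hyperparameters, not taken from the listing above.
vq = VectorQuantize(
    dim=256,                # feature dimension of the inputs
    codebook_size=512,      # number of codebook entries
    decay=0.8,              # EMA decay for codebook updates
    commitment_weight=1.0,  # weight of the commitment loss term
)

x = torch.randn(1, 1024, 256)            # (batch, sequence, dim)
quantized, indices, commit_loss = vq(x)  # (1, 1024, 256), (1, 1024), (1,)
```

The call returns the quantized vectors, the codebook indices selected for each input position, and the commitment loss to be added to the training objective.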